For each test log:
- Stress Wave Velocity
- SED, LED, Length
- Sweep
    - Principal
    - Orthogonal
- % heartwood
    - Large end
    - Small end
- Taper and “waist from a quadratic fit to polygonal area” (not sure what that waist parameter is)
- Whorl-i-ness
```r
library(RODBC)

# Log-level data from the LS15 database. Heartwood percentages are
# averaged over the digitisations of each log end.
ch = odbcConnect('LS15')
sql = "select
logs.ScionLogNumber,
Forest,
velocity_4900 as SWV,
yardLED as LED,
yardSED as SED,
yardLength as Length,
m_sweep1 as SweepPrincipal,
m_sweep2 as SweepOrthogonal,
L.h as LEheartPercent,
S.h as SEheartPercent,
m_taper as Taper,
m_waist as Waist,
m_whorliness as Whorliness,
stemVelocity as StemSWV
from
logs
left join
(select
ScionLogNumber,
avg(Aheart_mm2/A_mm2*100) as h
from LogEndDigitizations
where logEnd='large' group by ScionLogNumber) as L
on L.ScionLogNumber=logs.ScionLogNumber
left join
(select
ScionLogNumber,
avg(Aheart_mm2/A_mm2*100) as h
from LogEndDigitizations
where logEnd='small' group by ScionLogNumber) as S
on S.ScionLogNumber=logs.ScionLogNumber
"
L = sqlQuery(ch,sql)

# Stem-level data from the KPP processing facility database.
ch = odbcConnect("KPPSWI","sa","password12")
sql = "select
ScionLogNumber,
recFracGoodSWV*100 as StemGoodLogSWVPercent,
avgLogSWV as StemAvgLogSWV,
stems.stemLength as StemLength,
stems.stemVolume as StemVolume,
stemSED as StemSED,
stemLED as StemLED from Phase2
left join stems on stems.id=Phase2.stemID
where ScionLogNumber is not null"
S = sqlQuery(ch,sql)

# Merge log- and stem-level data on the log id and export.
write.csv(merge(L,S,by="ScionLogNumber"), row.names=FALSE, quote=FALSE, file="/home/harrinjj/Desktop/Dropbox/4stan/logs.csv")
```
If \(A_i\) is the (polygonal) area at position \(z_i\) along the log length, then taper is \(\frac{a_1 + a_2}{L}\) and waist is \(\frac{a_2}{4L^2}\), where
\[A(z) = a_2 z^2 + a_1 z + a_0\]
is the least-squares best fit to the \((z_i, A_i)\) data. Whorliness is the residual sum of squares,
\[\sum_i \left(A_i - (a_2 z_i^2 + a_1 z_i + a_0)\right)^2\]
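As a concrete illustration, these three quantities can be computed in R from an area profile; the data below are simulated, not from the trial:

```r
# Sketch: taper, waist, and whorliness from the quadratic area fit,
# per the definitions above. The area profile here is simulated.
set.seed(1)
z <- seq(0, 4.9, by = 0.15)                  # positions along the log (m)
A <- 9e4 - 8e3 * z + 3e2 * z^2 + rnorm(length(z), sd = 500)  # areas (mm^2)
len <- max(z)
fit <- lm(A ~ z + I(z^2))                    # least-squares quadratic A(z)
a <- coef(fit)                               # a0, a1, a2
taper <- (a[2] + a[3]) / len
waist <- a[3] / (4 * len^2)
whorliness <- sum(residuals(fit)^2)          # residual sum of squares
```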
All trial logs were trimmed at the small end to 4.9m. The length included in the dataset is the length before trimming. The SWV is that measured using Hitman on the trimmed log.
Log shape data every 6” along the length (scanner resolution):
- Diameter (polygonal)
- Elliptical shape
    - Major diameter
    - Minor diameter
    - Orientation of the major diameter (degrees) relative to a fixed coordinate system
- Ovality
See logshape.py and logshape.csv in the dropbox folder. I don’t normally use an ellipse model, so I’ve just coded one up for this job. It’s not been well tested, so if you see anything weird, let me know.
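For reference, one simple way to recover major/minor diameters and orientation from boundary points is via the second moments of the boundary; this is only a sketch of the idea, not necessarily the method logshape.py uses, and the boundary data are made up:

```r
# Sketch: major/minor diameter and orientation from cross-section
# boundary points, using second moments (not necessarily what
# logshape.py does). Test data: a 120 x 80 mm ellipse rotated 30 deg.
theta <- seq(0, 2*pi, length.out = 361)[-1]
phi <- pi/6
x <- 60*cos(theta)*cos(phi) - 40*sin(theta)*sin(phi)
y <- 60*cos(theta)*sin(phi) + 40*sin(theta)*cos(phi)
e <- eigen(cov(cbind(x, y)))
major <- 2*sqrt(2*e$values[1])     # approx major diameter (mm)
minor <- 2*sqrt(2*e$values[2])     # approx minor diameter (mm)
angle <- atan2(e$vectors[2,1], e$vectors[1,1]) * 180/pi
```

For a boundary sampled uniformly in the parametric angle, the variance of \(a\cos\theta\) is \(a^2/2\), hence the factor of 2; on real, noisy sections a least-squares conic fit would be more robust.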
Ovality is \[ A/\bar{A} - 1 \] where \(A\) is the polygonal area and \(\bar{A}\) is the area of a circle whose radius is the mean radius.
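A minimal sketch of that computation on a synthetic cross-section, using the shoelace formula for the polygonal area (the boundary here is made up):

```r
# Sketch: ovality = A/Abar - 1, where A is the polygonal (shoelace)
# area and Abar the area of a circle with the mean radius.
# Synthetic boundary r = 100 + 10*cos(2*theta): true ovality ~0.005.
theta <- seq(0, 2*pi, length.out = 201)[-1]
r <- 100 + 10*cos(2*theta)
x <- r*cos(theta); y <- r*sin(theta)
A <- abs(sum(x*c(y[-1], y[1]) - c(x[-1], x[1])*y)) / 2   # shoelace area
Abar <- pi * mean(r)^2                                   # mean-radius circle
ovality <- A/Abar - 1
```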
```r
library(lattice)
S = read.csv('/home/harrinjj/Desktop/Dropbox/4stan/logshape.csv')
# Pairwise relationships between the diameter measures.
pairs(S[,c('Diameter','EllipseMajor','EllipseMinor')])
# Diameter and ellipse-orientation profiles along each log.
xyplot(Diameter ~ DistanceFromLE | as.factor(ScionLogNumber), S, type="l", layout=c(20,5))
xyplot(EllipseAngle ~ DistanceFromLE | as.factor(ScionLogNumber), S, type="l", ylim=180*c(-1,1), layout=c(20,5))
xyplot(Ovality ~ I(EllipseMajor/EllipseMinor), S)
```
For each piece of lumber sawn from the log:
- log id
- Length
- Width
- Thickness
- Warp (signed twist, crook, bow)
- MOE
- Density
- Visual Grade
- Whether inner or outer
- MC at the time the warp data were obtained
```r
library(RODBC)
# Board-level data: dimensions, warp, density, dynamic MOE,
# distance from the pith, and moisture content.
ch = odbcConnect('LS15')
sql = "select
b.id, s.scantime,
logNumber as ScionLogNumber,
s.length_fromData_mm/1000. as `Length`,
bow.value as `Bow`,
crook.value as `Crook`,
sin(twist.value/180*3.141592654)*100. as `Twist`,
s.weight_kg/(s.volume_cm3/1.e6) as Density,
power(2*s.resonance_Hz*s.length_fromData_mm/1000.,2)*s.weight_kg/(s.volume_cm3/1.e6)/1.e9 as MOE,
c.R as DistanceFromPith,
w.MC
from
LS15.boards as b
left join joe90.scans as s on b.barcode=s.barcode
left join (select scanId, value from joe90.warp where postMethod='Joescan3_rev.107' and measure='bow.xref_mm') as bow on bow.scanId=s.id
left join (select scanId, value from joe90.warp where postMethod='Joescan3_rev.107' and measure='crook.xref_mm') as crook on crook.scanId=s.id
left join (select scanId, value from joe90.warp where postMethod='Joescan3_rev.107' and measure='twist.xref_deg') as twist on twist.scanId=s.id
left join (select boardId, avg(sqrt(x*x+y*y)) as R from logEndBarcodes group by boardId) as c on b.id=c.boardId
left join (select barcode, avg(mc) as MC from boardMCs group by barcode) as w on w.barcode=b.barcode
where
b.logNumber is not null
and s.project='LS15'
and s.scantime<'2014-02-19'
and s.`ignore`<>1
order by b.id, s.scantime
"
write.csv(sqlQuery(ch,sql), row.names=FALSE, quote=FALSE, file="/home/harrinjj/Desktop/Dropbox/4stan/boards.csv")
```
The board data is from the initial set of measurements done in January and February 2014.
Warp was assessed only for nominal 100x40 boards (some of which had varying amounts of wane).
Visual grade was never assessed. I’ve not labelled boards inner or outer, but have included distance from the pith. Distance from the pith was assessed only at the large end, and since all the logs in the study are butt logs, the pith wander near the large end that is characteristic of such logs probably limits how representative these distances are.
Moisture content was measured on only a few boards using an electrical resistance device at 3 locations and 2 pin depths. I’ve only provided an average value.
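For reference, the MOE in boards.csv is a dynamic (resonance) MOE: the query computes \((2 f L)^2 \rho\), with \(f\) the fundamental longitudinal resonance (Hz), \(L\) the board length (m), and \(\rho\) the density, then converts to GPa. A sketch of the arithmetic with made-up board numbers:

```r
# Sketch of the dynamic-MOE arithmetic embedded in the SQL query
# (the board numbers here are made up).
f <- 458            # fundamental resonance frequency (Hz)
len <- 4.8          # board length (m)
weight <- 8.64      # board weight (kg)
vol_cm3 <- 19200    # volume of a 100 x 40 x 4800 mm board (cm^3)
rho <- weight / (vol_cm3 / 1e6)     # density, kg/m^3 (here 450)
swv <- 2 * f * len                  # stress-wave velocity, m/s
moe_gpa <- swv^2 * rho / 1e9        # dynamic MOE, GPa (here ~8.7)
```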
Data associated with the Stand the log came from (if available)
- Age
- Average DBH (or stem LED) i.e. any way of knowing whether the stem associated with a test log was above or below average size for the stand
Currently, except for the ‘Forest’ variable (included in the log data), we do not have this information, although Timberlands may well be able to pull it from their database. If they still have it, and are willing to supply it, my guess is that it will take some effort, but nothing major.
Did you collect any acoustic or shape data on the stem from which the test log was taken – or from any of the upper logs taken from the same stem? If so, some summary statistics would be helpful.
The processing facility we used (KPP) did measure stem velocity, but we’ve since determined that the instrument performs poorly, with around 30% of the results being essentially random numbers. I’ve included it in the log dataset (StemSWV), but my best advice is to throw this data away without looking at it.
There is also an estimate of stem velocity based on the velocity of all logs recovered for which reasonable log SWV’s were obtained (StemAvgLogSWV). StemGoodLogSWVPercent is the fraction of the stem length represented by logs included in the StemAvgLogSWV computation. I’ve also included a few other stem parameters (StemSED, StemLED, StemVolume). Remember that the log shape data is actually collected as stem shape. So far I’ve not looked at all at relationships between overall stem shape and board properties, just log shape.
I would also like to see the end photos if you have a convenient way of sending that large data set.
I’ve uploaded the log end imagery collected in the yard into the dropbox logend folder.
```r
library(RODBC)
# Copy the yard log-end photos into the shared dropbox folder,
# renaming each to <ScionLogNumber>_<l|s>.jpg (l = large end, s = small).
ch = odbcConnect('LS15')
sql = "select * from LogEndImages where imageType='yard'"
I = sqlQuery(ch, sql)
for (i in 1:nrow(I)) {
cmd = sprintf("cp '/media/Q/SWI/LogsAndStems/Predicting performance of appearance/Phase 2/Results/KPP Trial 20-22Nov13/yard logend photos/%s' /home/harrinjj/Desktop/Dropbox/4stan/logend/%i_%s.jpg", I[i,"filename"], I[i,"ScionLogNumber"], substr(I[i,"logEnd"],1,1))
#print(cmd)
#system(cmd)   # uncomment to actually copy the files
}
```
I would like to know the warp limits for the 7 grade rules you applied
For 100x40 boards the limits used were:
| set | bow.mid (mm) | crook.mid (mm) | twist.tot (mm) |
|---|---|---|---|
| nzs3631 | 85 | 35 | 15 |
| scaled | 42 | 17 | 7 |
| anti.crook | 85 | 10 | 15 |
| japan | 5 | 5 | 5 |
| customer.A | 37 | 10 | 25 |
| wwpa.CSelect | 75 | 25 | 3 |
The seventh limit set (‘scaled.with.moe’) included an MOE limit of 6 GPa, but was otherwise the same as the ‘scaled’ set.
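To make the thresholding concrete, here is a sketch of applying the ‘scaled’ limit set. The warp values in boards.csv are signed, so absolute values are compared; note that Twist in boards.csv is stored as sin(angle)×100 rather than mm, so a real grading pass would first convert it to a millimetre displacement. The board values below are hypothetical:

```r
# Sketch: apply the 'scaled' warp limits to (hypothetical) board warp
# values. Warp is signed, so compare absolute values; Twist is assumed
# already converted to mm of total twist.
boards <- data.frame(Bow = c(10, -50, 30), Crook = c(5, 8, -20), Twist = c(3, 6, 2))
limits <- c(bow = 42, crook = 17, twist = 7)
passes <- abs(boards$Bow) <= limits["bow"] &
          abs(boards$Crook) <= limits["crook"] &
          abs(boards$Twist) <= limits["twist"]
```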
Can you show me how the distributions of the AV and shape parameters of your 100 log sample set compared to the AV and shape parameters of all the butt logs processed during that 3 day sample period?
Logs were selected to cover the observed range, rather than to mimic the joint or individual distributions. I think the plot below is the most useful way to get a broad overview of how the selected logs compare with the population. You might disagree, in which case tell me a bit more about what you want and I’ll see what I can do.
```r
library(RODBC)
library(lattice)
library(hexbin)
# All pruned logs processed during the trial period, from the KPP database.
ch = odbcConnect('KPPSWI','sa','password12')
L = sqlQuery(ch, "select
ScionLogNumber,
m_sed as SED,
m_waist as Waist,
m_taper as Taper,
m_sweep1 as Sweep1,
m_sweep2 as Sweep2,
m_whorliness as Whorli,
m_ovality as Ovali,
velocity as SWV
from Phase2 where m_nslices is not null and grade like '%P%' and velocity>2200")#" and m_sweep1<10 and m_ovality<0.02")
pvars = c("SED", "Waist", "Taper", "Sweep1", "Sweep2", "Whorli", "Ovali", "SWV")
# Trim the most extreme 5% in each tail of every variable, keeping only
# logs inside the central 90% on all eight axes simultaneously.
ii = 1:nrow(L)
for (p in pvars) {
q = 0.05
ii = intersect(ii, which(L[,p] > quantile(L[,p], q, na.rm=TRUE) & L[,p] < quantile(L[,p], 1-q, na.rm=TRUE)))
}
# Scatterplot matrix: population as hexbin density, selected logs in red.
splom(L[ii,pvars], panel=function(x, y, ...) {
panel.hexbinplot(x, y)
isel = !is.na(L$ScionLogNumber[ii])
panel.xyplot(x[isel], y[isel], pch=19, col='red', cex=0.3)
},
main="")
```
Displaying the most typical 50.1% of the 18844 pruned logs in the database.
Red points are the 100 selected logs. Grey scale denotes probability of finding given characteristics in the population (black = high probability).